A Defining Markov locality and relating it to p locality
We now define Markov locality, using the language of Markov blankets. Every Markov boundary is a Markov blanket, but not all blankets are boundaries. A Markov boundary can be thought of as the set of variables that 'locally' communicate with the parameter. Importantly, for Markov locality to be of use, we would like the Markov boundaries of random variables in the model of interest to be unique. Assume all quantities are as in A.1 and that the stated conditional independence relationships hold. This proof relies on Lemma A.1, proved below. We wish to prove Eq. 2.
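On the uniqueness point above: for strictly positive distributions the Markov boundary is unique, and in a Bayesian network it coincides with the node's parents, children, and co-parents. A minimal sketch of that graph-based computation (the toy DAG is illustrative, not from the paper):

```python
# Sketch: the Markov blanket of a node in a DAG (Bayesian network) is
# its parents, its children, and the other parents of its children.
# For strictly positive distributions this blanket is the unique boundary.

def markov_blanket(parents, node):
    """parents maps each node to the set of its parents in the DAG."""
    children = {c for c, ps in parents.items() if node in ps}
    co_parents = {p for c in children for p in parents[c]} - {node}
    return parents[node] | children | co_parents

# Toy DAG: A -> C <- B, C -> D
dag = {"A": set(), "B": set(), "C": {"A", "B"}, "D": {"C"}}
print(sorted(markov_blanket(dag, "A")))  # ['B', 'C']
```

Note that B enters A's blanket only because both are parents of C, which is exactly the 'co-parent' case that makes blankets larger than the obvious neighbours.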
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Undirected Networks > Markov Models (0.46)
- Information Technology > Artificial Intelligence > Representation & Reasoning > Uncertainty > Bayesian Inference (0.46)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (0.46)
- North America > Canada > Quebec > Montreal (0.14)
- North America > United States (0.14)
- Asia > Middle East > Jordan (0.04)
- Europe > France (0.04)
- Information Technology > Artificial Intelligence > Machine Learning > Neural Networks (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Learning Graphical Models > Directed Networks > Bayesian Learning (1.00)
- Information Technology > Artificial Intelligence > Cognitive Science (1.00)
Review for NeurIPS paper: A simple normative network approximates local non-Hebbian learning in the cortex
Weaknesses: The empirical evaluation is one of the weakest aspects of the paper. The fact that it is done on only one, seemingly arbitrarily chosen, dataset diminishes the significance of the results. I would have liked to see evaluation on more standard datasets. Some aspects of the biological mapping may not be biologically plausible:
- Only linear models are considered. In biology, pyramidal cells are known to have many non-linear effects.
2021 in review: unsupervised brain models
No longer tied to conventional publication venues with year-long turnaround times, our field is moving at record speed. As 2021 draws to a close, I wanted to take some time to zoom out and review a recent trend in neuro-AI: the move toward unsupervised learning to explain representations in different brain areas. One of the most robust findings in neuro-AI is that artificial neural networks trained to perform ecologically relevant tasks match single neurons and ensemble signals in the brain. The canonical example is the ventral stream, where DNNs trained for object recognition on ImageNet match representations in IT (Khaligh-Razavi & Kriegeskorte, 2014; Yamins et al., 2014). Supervised, task-optimized networks link two important forms of explanation: ecological relevance and accounting for neural activity.
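One common way studies like Khaligh-Razavi & Kriegeskorte (2014) quantify the "match" between model and brain is representational similarity analysis (RSA): compare the pairwise dissimilarity structure of model features with that of neural responses. A minimal sketch with random stand-in data (the matrices here are hypothetical, not from any study):

```python
import numpy as np

def rdm(X):
    """Representational dissimilarity matrix: 1 - correlation between
    the rows of a stimulus-by-feature matrix."""
    return 1.0 - np.corrcoef(X)

def rsa_score(model_feats, neural_resps):
    """Correlate the upper triangles of the two RDMs (Pearson here;
    rank correlation is also common in practice)."""
    iu = np.triu_indices(model_feats.shape[0], k=1)
    a, b = rdm(model_feats)[iu], rdm(neural_resps)[iu]
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
stims = rng.normal(size=(20, 50))                      # 20 stimuli
score = rsa_score(stims @ rng.normal(size=(50, 30)),   # "model" features
                  stims @ rng.normal(size=(50, 40)))   # "neural" responses
```

The key design choice in RSA is that it abstracts away the feature basis: only the relative geometry of the stimulus representations is compared, so a model and a brain area can match without any unit-to-neuron alignment.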
Information Bottleneck-Based Hebbian Learning Rule Naturally Ties Working Memory and Synaptic Updates
Daruwalla, Kyle, Lipasti, Mikko
Artificial neural networks have successfully tackled a large variety of problems by training extremely deep networks via back-propagation. A direct application of back-propagation to spiking neural networks contains biologically implausible components, like the weight transport problem or separate inference and learning phases. Various methods address different components individually, but a complete solution remains intangible. Here, we take an alternate approach that avoids back-propagation and its associated issues entirely. Recent work in deep learning proposed independently training each layer of a network via the information bottleneck (IB). Subsequent studies noted that this layer-wise approach circumvents error propagation across layers, leading to a biologically plausible paradigm. Unfortunately, the IB is computed using a batch of samples. The prior work addresses this with a weight update that only uses two samples (the current and previous sample). Our work takes a different approach by decomposing the weight update into a local and global component. The local component is Hebbian and only depends on the current sample. The global component computes a layer-wise modulatory signal that depends on a batch of samples. We show that this modulatory signal can be learned by an auxiliary circuit with working memory (WM) like a reservoir. Thus, we can use batch sizes greater than two, and the batch size determines the required capacity of the WM. To the best of our knowledge, our rule is the first biologically plausible mechanism to directly couple synaptic updates with a WM of the task. We evaluate our rule on synthetic datasets and image classification datasets like MNIST, and we explore the effect of the WM capacity on learning performance. We hope our work is a first step towards understanding the mechanistic role of memory in learning.
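The local/global decomposition described in the abstract can be sketched roughly as follows. The modulator formula below is a placeholder stand-in, not the paper's IB-derived signal, and all names are hypothetical; it only illustrates the structure of a Hebbian local term scaled by a layer-wise batch-dependent scalar:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)) * 0.1   # one layer: 8 inputs -> 4 outputs

def hebbian_step(W, x_batch, lr=0.01):
    y_batch = np.tanh(x_batch @ W.T)       # layer activations for the batch
    # Global component: a single layer-wise scalar computed over the batch
    # (stand-in for the IB-based modulatory signal in the paper).
    modulator = 1.0 / (1.0 + y_batch.var())
    # Local component: Hebbian outer product for the current sample only.
    x, y = x_batch[-1], y_batch[-1]
    return W + lr * modulator * np.outer(y, x)

W = hebbian_step(W, rng.normal(size=(16, 8)))  # batch of 16 samples
```

The point of the split is that only the scalar modulator needs the batch, which is what lets the paper offload that computation to a working-memory circuit while the synapse itself sees only the current pre/post activity.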
- North America > United States > Wisconsin > Dane County > Madison (0.14)
- Africa > Sudan (0.04)